15 research outputs found

    Adversarially Robust Distillation

    Knowledge distillation is effective for producing small, high-performance neural networks for classification, but these small networks are vulnerable to adversarial attacks. This paper studies how adversarial robustness transfers from teacher to student during knowledge distillation. We find that a large amount of robustness may be inherited by the student even when distilled on only clean images. We then introduce Adversarially Robust Distillation (ARD) for distilling robustness onto student networks. In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student. In our experiments, we find that ARD student models decisively outperform adversarially trained networks of identical architecture in terms of robust accuracy, surpassing state-of-the-art methods on standard robustness benchmarks. Finally, we adapt recent fast adversarial training methods to ARD for accelerated robust distillation.
    Comment: Accepted to the AAAI Conference on Artificial Intelligence, 2020
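
    A minimal PyTorch-style sketch of this kind of robust-distillation objective is shown below. It is an illustration under stated assumptions, not the paper's implementation: the inner attack here maximizes the student-teacher KL divergence with PGD (a cross-entropy PGD attack is an equally plausible choice), the outer loss combines a temperature-scaled distillation term on adversarial inputs with a clean cross-entropy term, and the hyper-parameter values (`eps`, `temperature`, `alpha`) are placeholders.

```python
# Hypothetical sketch of one ARD-style training step (not the authors' code).
# Assumes a pretrained robust `teacher`, a smaller `student`, a batch of
# labelled images `x, y`, and standard L-infinity PGD hyper-parameters.
import torch
import torch.nn.functional as F

def robust_distillation_step(student, teacher, x, y, optimizer,
                             eps=8 / 255, step_size=2 / 255, pgd_steps=10,
                             temperature=4.0, alpha=0.9):
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x)                      # teacher sees clean images only

    # Inner maximization: craft a perturbation that pushes the student's
    # prediction away from the teacher's (KL-based PGD attack on the student).
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(pgd_steps):
        s_logits = student(torch.clamp(x + delta, 0, 1))
        attack_loss = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                               F.softmax(t_logits / temperature, dim=1),
                               reduction="batchmean")
        grad, = torch.autograd.grad(attack_loss, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)

    x_adv = torch.clamp(x + delta.detach(), 0, 1)

    # Outer minimization: distill the teacher's clean predictions onto the
    # student's adversarial predictions, plus a clean cross-entropy term.
    student.train()
    s_adv = student(x_adv)
    kd = F.kl_div(F.log_softmax(s_adv / temperature, dim=1),
                  F.softmax(t_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student(x), y)
    loss = alpha * kd + (1 - alpha) * ce

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```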

    Teacher Prompting for Federated Hotword Training

    Federated hotword training enables developing high-quality models on real-world user data that is kept entirely on-device. However, such training relies on existing teacher models, which limits the quality of the synthetic labels provided to the student model during federated training. This disclosure describes techniques that use a feature-wise linear modulation method to incorporate an utterance-level label prompt as an additional input for federated hotword training by modulating intermediate-layer outputs. Such a model, when trained on central data, can be used as a teacher for federated training that takes place on user devices. The feature-wise modulation layer receives utterance-level label prompts, which are available when training the teacher models centrally. As a result, the teacher model learns to associate the utterance-level signal with the correct frame-level activation pattern during central training. Such a model can then be deployed as a teacher for federated training. During federated training, on-device signals correlated with utterance-level labels, such as the output of on-device ASR models, binary classifiers, or metadata, are leveraged for improved teacher performance.
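
    The core mechanism is a feature-wise linear modulation (FiLM) layer, which scales and shifts intermediate features using parameters predicted from the prompt. Below is a minimal sketch assuming a PyTorch-style keyword-spotting encoder; the layer sizes, the binary prompt vocabulary, and the placement of the modulation layer are illustrative assumptions rather than details from the disclosure.

```python
# Hypothetical FiLM-style modulation of an intermediate hotword-model layer
# by an utterance-level label prompt (illustrative; not the disclosed model).
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Scales and shifts intermediate features using a prompt embedding."""
    def __init__(self, prompt_dim, feature_dim):
        super().__init__()
        self.to_gamma = nn.Linear(prompt_dim, feature_dim)
        self.to_beta = nn.Linear(prompt_dim, feature_dim)

    def forward(self, features, prompt):
        # features: (batch, frames, feature_dim); prompt: (batch, prompt_dim)
        gamma = self.to_gamma(prompt).unsqueeze(1)   # broadcast over frames
        beta = self.to_beta(prompt).unsqueeze(1)
        return gamma * features + beta

class PromptedHotwordModel(nn.Module):
    def __init__(self, num_mel=40, hidden=128, prompt_dim=16):
        super().__init__()
        self.encoder = nn.GRU(num_mel, hidden, batch_first=True)
        # Utterance-level label prompt (e.g. "contains hotword" vs. "does not"),
        # embedded and injected via FiLM after the encoder.
        self.prompt_embed = nn.Embedding(2, prompt_dim)
        self.film = FiLM(prompt_dim, hidden)
        self.head = nn.Linear(hidden, 1)             # frame-level activation

    def forward(self, mel_frames, utterance_label):
        h, _ = self.encoder(mel_frames)
        h = self.film(h, self.prompt_embed(utterance_label))
        return self.head(h).squeeze(-1)              # per-frame logits

# Usage sketch: during central training the true utterance-level label is fed
# as the prompt; during federated training an on-device signal (ASR output,
# a binary classifier, metadata) would stand in for it.
model = PromptedHotwordModel()
mel = torch.randn(8, 100, 40)                        # batch of 8 utterances
labels = torch.randint(0, 2, (8,))
frame_logits = model(mel, labels)                    # shape (8, 100)
```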